
    Genetic algorithm dynamics on a rugged landscape

    The genetic algorithm is an optimization procedure motivated by biological evolution and has been applied successfully to optimization problems in many areas. A statistical mechanics model for its dynamics is proposed, based on the parent-child fitness correlation of the genetic operators, making it applicable to general fitness landscapes. It is compared to a recent model based on a maximum entropy ansatz. Finally, it is applied to modeling the dynamics of a genetic algorithm on the rugged fitness landscape of the NK model.
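
    For readers unfamiliar with the setting, here is a minimal sketch of a simple genetic algorithm run on an NK fitness landscape (binary genomes, K random epistatic neighbours per locus). The population size, truncation selection, and mutation rate are illustrative assumptions, not the operators analysed in the paper.

        import random

        def make_nk_landscape(n=20, k=4, seed=0):
            """Random NK landscape: each locus i depends on itself and K random other loci."""
            rng = random.Random(seed)
            neighbours = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
            tables = [{} for _ in range(n)]

            def fitness(genome):
                total = 0.0
                for i in range(n):
                    key = (genome[i],) + tuple(genome[j] for j in neighbours[i])
                    if key not in tables[i]:
                        tables[i][key] = rng.random()  # lazily drawn random fitness contribution
                    total += tables[i][key]
                return total / n

            return fitness

        def run_ga(fitness, n=20, pop_size=50, generations=100, p_mut=0.02, seed=1):
            """Toy GA: truncation selection, one-point crossover, bit-flip mutation."""
            rng = random.Random(seed)
            pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
            for _ in range(generations):
                parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
                children = []
                while len(children) < pop_size:
                    a, b = rng.sample(parents, 2)
                    cut = rng.randrange(1, n)                              # one-point crossover
                    child = [g ^ (rng.random() < p_mut) for g in a[:cut] + b[cut:]]  # bit-flip mutation
                    children.append(child)
                pop = children
            return max(pop, key=fitness)

        fit = make_nk_landscape()
        best = run_ga(fit)
        print(fit(best))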

    Constraint-satisfaction problems


    Annealing linear scalarized based multi-objective multi-armed bandit algorithm

    A stochastic multi-objective multi-armed bandit problem is a particular type of multi-objective (MO) optimization problem in which the goal is to find and play fairly the optimal arms. To solve this problem, we propose an annealing linear scalarized algorithm that transforms the MO optimization problem into a single-objective one using a linear scalarization function, and finds and plays fairly the optimal arms using a decaying parameter ε_t. We empirically compare the linear scalarized-UCB1 algorithm with the annealing linear scalarized algorithm on a test suite of multi-objective multi-armed bandit problems with independent Bernoulli distributions, using different approaches to define the weight sets: the standard approach, the adaptive approach and the genetic approach. We conclude that the performance of the annealing scalarized and the scalarized UCB1 algorithms depends on the weight approach used.
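
    A minimal sketch, under illustrative assumptions, of one way a linearly scalarized bandit with a decaying exploration parameter ε_t could be implemented; the decay schedule, the fixed weight vector, and the Bernoulli arm parameters are placeholders rather than the paper's exact algorithm.

        import random

        def annealing_linear_scalarized_bandit(arm_means, weights, horizon=10000, seed=0):
            """Scalarize reward vectors with fixed weights and anneal the exploration
            probability eps_t over time."""
            rng = random.Random(seed)
            n_arms = len(arm_means)
            counts = [0] * n_arms
            scalar_estimates = [0.0] * n_arms

            for t in range(1, horizon + 1):
                eps_t = 1.0 / (t ** 0.5)            # illustrative decay schedule for epsilon_t
                if rng.random() < eps_t:
                    arm = rng.randrange(n_arms)     # explore
                else:
                    arm = max(range(n_arms), key=lambda a: scalar_estimates[a])  # exploit
                # Draw a Bernoulli reward vector and scalarize it linearly.
                reward_vec = [1.0 if rng.random() < p else 0.0 for p in arm_means[arm]]
                scalar_reward = sum(w * r for w, r in zip(weights, reward_vec))
                counts[arm] += 1
                scalar_estimates[arm] += (scalar_reward - scalar_estimates[arm]) / counts[arm]

            return counts, scalar_estimates

        # Bi-objective example: each arm has one mean reward per objective.
        counts, est = annealing_linear_scalarized_bandit(
            arm_means=[(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)],
            weights=(0.5, 0.5),
        )
        print(counts, est)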

    Pareto upper confidence bounds algorithms: an empirical study

    Many real-world stochastic environments are inherently multi-objective environments with conflicting objectives. Multi-objective multi-armed bandits (MOMAB) extend the classical, i.e. single-objective, multi-armed bandits to reward vectors, and multi-objective optimisation techniques are often required to design mechanisms with an efficient exploration/exploitation trade-off. In this paper, we propose the improved Pareto Upper Confidence Bound (iPUCB) algorithm, which straightforwardly extends the single-objective improved UCB algorithm to reward vectors by deleting the suboptimal arms. The goal of the improved Pareto UCB algorithm, i.e. iPUCB, is to identify the set of best arms, or the Pareto front, within a fixed budget of arm pulls. We experimentally compare the performance of the proposed Pareto upper confidence bound algorithm with the Pareto UCB1 algorithm and the Hoeffding race on a bi-objective example from an industrial control application, i.e. the engagement of wet clutches. We propose a new regret metric based on the Kullback-Leibler divergence to measure the performance of a multi-objective multi-armed bandit algorithm. We show that iPUCB outperforms the other two tested algorithms on the given multi-objective environment.
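
    Below is a minimal sketch of the Pareto-dominance test and of a Pareto-UCB1-style selection step, in which a confidence bonus is added to each arm's empirical reward vector and an arm is pulled uniformly at random from the resulting Pareto front. The exact bonus term and the example numbers are illustrative assumptions, not the iPUCB algorithm itself.

        import math
        import random

        def dominates(u, v):
            """True if vector u Pareto-dominates v (>= in every objective, > in at least one)."""
            return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

        def pareto_ucb1_step(means, counts, t):
            """One selection step in the spirit of Pareto UCB1: add a confidence bonus to
            each arm's empirical reward vector, then pull uniformly among the
            non-dominated arms. The bonus constant is an illustrative assumption."""
            bonus = [math.sqrt(2.0 * math.log(t) / c) for c in counts]
            index_vectors = [tuple(m + bonus[a] for m in means[a]) for a in range(len(means))]
            pareto_front = [a for a in range(len(means))
                            if not any(dominates(index_vectors[b], index_vectors[a])
                                       for b in range(len(means)) if b != a)]
            return random.choice(pareto_front)

        # Example: empirical bi-objective means after 100 pulls in total.
        means = [(0.8, 0.2), (0.5, 0.6), (0.3, 0.3)]
        counts = [40, 40, 20]
        print(pareto_ucb1_step(means, counts, t=sum(counts)))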

    GAMOT: AN EFFICIENT GENETIC ALGORITHM FOR FINDING CHALLENGING MOTIFS IN DNA SEQUENCES

    Weak signals that mark transcription factor binding sites involved in gene regulation are considered challenging motifs. Identifying these motifs in unaligned DNA sequences is a computationally hard problem that requires efficient algorithms. Genetic Algorithms (GA), inspired by evolution in nature, are a class of stochastic search algorithms that have been applied successfully to many computationally hard problems, including regulatory site prediction. In this paper, we propose GAMOT, an efficient GA for solving planted (l, d)-motif problems as introduced by Pevzner and Sze. We show empirically that our algorithm is able not only to solve the challenging problem instances with short motifs such as (14,4) and (15,4) efficiently, but also to solve problems with longer motifs such as (20,7), (30,11) and (40,15). GAMOT can find the planted motifs in near-linear computational time thanks to an additional step that creates a highly fit population of solutions even before the evolutionary process is applied. We present a comparison of our results with some state-of-the-art algorithms such as VAS and PROJECTION.
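
    As a rough illustration of the planted (l, d)-motif setting, here is a sketch of the kind of scoring a GA could optimise: the total Hamming distance between a candidate motif and its best-matching l-mer in each sequence. The scoring function and the toy sequences are illustrative assumptions, not GAMOT's actual fitness function.

        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))

        def motif_score(candidate, sequences):
            """Total distance of a candidate motif to its best-matching l-mer in each sequence.
            A distance of at most d per sequence is consistent with the planted (l, d)-motif."""
            l = len(candidate)
            total = 0
            for seq in sequences:
                best = min(hamming(candidate, seq[i:i + l]) for i in range(len(seq) - l + 1))
                total += best
            return total

        sequences = ["ACGTACGTGGTTACAGT", "TTGACGTACGAACCGTA", "GGACGAACGTTTTGCAC"]
        print(motif_score("ACGTACGT", sequences))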

    Knowledge gradient exploration in online kernel-based LSPI

    We introduce online kernel-based LSPI (least squares policy iteration), which combines features of online LSPI and offline kernel-based LSPI. The knowledge gradient is used as the exploration policy in both online LSPI and online kernel-based LSPI in order to compare their performance on two discrete Markov decision problems. Automatic feature selection in online kernel-based LSPI, which results from the approximate linear dependency based kernel sparsification, improves performance compared to online LSPI.
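
    As a rough illustration of the kernel sparsification mentioned above, here is a sketch of an approximate linear dependency (ALD) test that grows a dictionary of state features online; the RBF kernel, the tolerance nu, and the example states are illustrative assumptions rather than the paper's exact configuration.

        import numpy as np

        def ald_test(dictionary, x, kernel, nu=0.1):
            """Approximate linear dependency (ALD) test for kernel sparsification:
            accept x as a new dictionary element only if its feature-space image cannot
            be approximated by the current dictionary within tolerance nu."""
            if not dictionary:
                return True, None
            K = np.array([[kernel(d1, d2) for d2 in dictionary] for d1 in dictionary])
            k_x = np.array([kernel(d, x) for d in dictionary])
            # Least-squares coefficients of x's feature vector on the dictionary's features.
            a = np.linalg.solve(K + 1e-8 * np.eye(len(dictionary)), k_x)
            delta = kernel(x, x) - k_x @ a          # squared residual in feature space
            return delta > nu, a

        def rbf(u, v, gamma=1.0):
            u, v = np.asarray(u, float), np.asarray(v, float)
            return float(np.exp(-gamma * np.sum((u - v) ** 2)))

        # Grow a sparse dictionary of state features online.
        dictionary = []
        for state in [[0.0], [0.05], [1.0], [0.98], [2.0]]:
            novel, _ = ald_test(dictionary, state, rbf, nu=0.1)
            if novel:
                dictionary.append(state)
        print(dictionary)   # nearly redundant states are skipped, distant ones are kept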

    BNAIC 2005, The 17th Belgian-Dutch Conference on Artificial Intelligence, Katja Verbeeck, Karl Tuyls, Ann Nowe,

    Genetic programming (GP) is used to evolve global optimisation test problems. These automatically generated problems are used to show strengths and weaknesses of Particle Swarm Optimization (PSO) and Differential Evolution (DE). The knowledge gained will help when choosing maximisers (and their tuning parameters) and in research into new search tools (which might include hyperheuristics).
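
    As a stand-in for the kind of optimiser comparison described above, here is a minimal Differential Evolution (DE/rand/1/bin) sketch on a placeholder sphere function; in the paper the test problems are evolved by GP rather than fixed by hand, so the function, bounds and DE parameters used here are illustrative assumptions.

        import random

        def sphere(x):
            """Placeholder test function; the paper evolves such problems with GP."""
            return sum(v * v for v in x)

        def differential_evolution(f, dim=5, pop_size=20, gens=200, F=0.5, CR=0.9, seed=0):
            """Classic DE/rand/1/bin minimisation on the box [-5, 5]^dim."""
            rng = random.Random(seed)
            pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
            fit = [f(x) for x in pop]
            for _ in range(gens):
                for i in range(pop_size):
                    a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
                    j_rand = rng.randrange(dim)
                    trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                             if (rng.random() < CR or k == j_rand) else pop[i][k]
                             for k in range(dim)]
                    if f(trial) <= fit[i]:
                        pop[i], fit[i] = trial, f(trial)
            best = min(range(pop_size), key=lambda i: fit[i])
            return pop[best], fit[best]

        print(differential_evolution(sphere))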